
    Flow cytometry for enrichment and titration in massively parallel DNA sequencing

    Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements of current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system, using fluorescence-activated cell sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for costly pilot sequencing of samples during titration, rapidly provides accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can readily be used to highly enrich fractions of beads carrying template DNA, with near-total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtaining pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols.
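The abstract does not give the underlying arithmetic, but DNA-to-bead titration for emulsion PCR is commonly modeled with Poisson loading statistics. The sketch below illustrates that idea only; the function names and the 39.3% example figure are hypothetical, not values from the paper.

```python
import math

def copies_per_bead(positive_fraction):
    """Infer the mean number of template copies per bead (lambda) from the
    fraction of beads that FACS classifies as template-positive, assuming
    Poisson loading: P(positive) = 1 - exp(-lambda)."""
    return -math.log(1.0 - positive_fraction)

def monoclonal_fraction(lam):
    """Fraction of all beads expected to carry exactly one template,
    P(k = 1) = lambda * exp(-lambda); this curve peaks at lambda = 1,
    which is why titration targets a specific DNA-to-bead ratio."""
    return lam * math.exp(-lam)

# Hypothetical example: FACS reports 39.3% of beads fluorescing above
# the template-positive threshold.
lam = copies_per_bead(0.393)       # about 0.5 copies per bead
purity = monoclonal_fraction(lam)  # expected share of monoclonal beads
print(f"lambda = {lam:.3f}, monoclonal fraction = {purity:.3f}")
```

Under this model a single flow-cytometry measurement of the positive fraction replaces a pilot sequencing run: lambda follows directly, and the DNA input can be scaled toward the desired ratio.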

    Translational Database Selection and Multiplexed Sequence Capture for Up Front Filtering of Reliable Breast Cancer Biomarker Candidates

    Biomarker identification is of utmost importance for the development of novel diagnostics and therapeutics. Here we make use of a translational database selection strategy, utilizing data from the Human Protein Atlas (HPA) on differentially expressed protein patterns in healthy and breast cancer tissues as a means to filter out potential biomarkers for underlying genetic causes of the disease. DNA was isolated from ten breast cancer biopsies, and the protein-coding and flanking non-coding genomic regions corresponding to the selected proteins were extracted from the samples in a multiplexed format using a single DNA sequence capture array. Deep sequencing revealed an even enrichment of the multiplexed samples and a great variation of genetic alterations in the tumors of the sampled individuals. Benefiting from the upstream filtering method, the final set of biomarker candidates could be completely verified through bidirectional Sanger sequencing, revealing a 40 percent false positive rate despite high read coverage. Of the variants encountered in translated regions, nine novel non-synonymous variations were identified and verified, two of which were present in more than one of the ten tumor samples.

    Changes over time in the qualitative and quantitative food webs of the Gulf of Riga (1981–2014)

    Network research offers a framework for investigating how ecological food web structure and function vary over time, which allows changes in ecosystems to be addressed and anticipated. This understanding is of vital importance for shaping conservation efforts and ecosystem management in light of anthropogenic change. Our current understanding of how resolved food webs vary through time comes primarily from binary (presence/absence) networks, which ignore the strength of trophic interactions. In contrast, weighted networks account for interaction strength (energy fluxes), and hence can reveal more subtle fluctuations in community structure through changes in the biomasses of species and their fluxes, rather than just through fluctuations in species number and identity. Using a time series of food webs constructed from long-term biomass data and highly resolved information on species' trophic relationships, combined with a bioenergetic modeling approach, allowed a comparison of how unweighted (topology-based) and weighted (flux-based) food web approaches differ with regard to modularity through time. Greater modularity is thought to enhance food web stability with respect to the spread of disturbances in a network, as perturbations may be contained within the modules. Looking at modularity also facilitates the assessment of species' functional roles through time by quantifying their position in the network relative to its modules. The analyses revealed that the link-weighted approach resulted in a more refined partitioning of network community structure (modularity) and of how it changed over time. The weighted networks also showed more subtle changes in species roles, for example in how some species connect modules, giving a better understanding of how the functioning of the network changed over time.
For example, the weighted food webs clearly captured a collapse of the benthos in the mid-1990s through its impact on modularity, which was hardly reflected in the unweighted version. The results outlined in this thesis further support previous findings that the inclusion of flux-based information and link-weighted food web network analyses is vital to gain a more complete understanding of how ecological networks change through time with regard to their structure and functioning.
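The contrast between topology-based and flux-based modularity can be made concrete with Newman's modularity score Q, computed once on binary links and once on link weights. The toy web, the weights, and the two-module partition below are purely illustrative (the thesis analyzes real directed energy fluxes with dedicated module-detection algorithms); this minimal sketch only shows why weighting the links changes the score.

```python
def modularity(adj, partition):
    """Newman modularity Q for an undirected network given as a dict
    {(i, j): weight} with each edge listed once (no self-loops), and a
    dict partition {node: module}. Works for binary (weight = 1) and
    flux-weighted webs alike."""
    two_m = 2.0 * sum(adj.values())
    strength = {}  # weighted degree of each node
    for (i, j), w in adj.items():
        strength[i] = strength.get(i, 0.0) + w
        strength[j] = strength.get(j, 0.0) + w
    q = 0.0
    nodes = list(strength)
    for a in nodes:            # sum over ordered node pairs in one module
        for b in nodes:
            if partition[a] != partition[b]:
                continue
            w = adj.get((a, b), 0.0) + adj.get((b, a), 0.0)
            q += w - strength[a] * strength[b] / two_m
    return q / two_m

# Toy six-species web: two triangles bridged by one link (3, 4).
binary = {(1, 2): 1, (2, 3): 1, (1, 3): 1,
          (4, 5): 1, (5, 6): 1, (4, 6): 1, (3, 4): 1}
# Same topology, but within-module fluxes are strong, the bridge weak.
weighted = {e: (1.0 if e == (3, 4) else 10.0) for e in binary}
modules = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(modularity(binary, modules), modularity(weighted, modules))
```

With identical topology and partition, the flux-weighted score is higher because the strong within-module fluxes dominate, which is the kind of signal a binary network cannot express.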

    Methods for Analyzing Genomes

    The human genome reference sequence has given us a two-dimensional blueprint of our inherited code of life, but we need to employ modern-day technology to expand our knowledge into a third dimension. Inter-individual and intra-individual variation has been shown to be larger than anticipated, and the mode of genetic regulation more complex. Therefore, the methods that were once used to explain our fundamental constitution are now used to decipher our differences. Over the past four years, throughput from DNA-sequencing platforms has increased a thousand-fold, bearing evidence of a rapid development in the field of methods used to study DNA and the genomes it constitutes. The work presented in this thesis has been carried out as an integrated part of this technological evolution, contributing to it, and applying the resulting solutions to answer difficult biological questions. Papers I and II describe a novel approach for microarray readout based on immobilization of magnetic particles, applicable to diagnostics. As benchmarked on canine mitochondrial DNA, and on human genomic DNA from individuals with cystic fibrosis, it allows for visual interpretation of genotyping results without the use of machines or expensive equipment. Paper III outlines an automated and cost-efficient method for enrichment and titration of clonally amplified DNA libraries on beads. The method uses fluorescent labeling and a flow cytometer to separate DNA-carrying beads from empty ones. At the same time, the fraction of either bead type is recorded, and a titration curve can be generated. In Paper IV we combined the highly discriminating multiplex genotyping of trinucleotide threading with the digital readout made possible by massively parallel sequencing. From this we were able to characterize the allelic distribution of 88 obesity-related SNPs in a population of 462 individuals enrolled at a childhood obesity center.
Paper V employs the throughput of present-day DNA sequencing as it investigates deep into sun-exposed skin to find clues on the effects of sunlight during the course of a summer holiday. The tumor suppressor gene p53 was targeted, only to find that, despite its well-documented involvement in the disease progression of cancers, an estimated 35,000 novel sun-induced persistent p53 mutations are added and phenotypically tolerated in the skin of every individual every year. The last paper, VI, describes a novel approach for finding breast cancer biomarkers. In this translational study we used differential protein expression profiles and sequence capture to select and enrich for 52 candidate genes in DNA extracted from ten tumors. Two of the genes turned out to harbor protein-altering mutations in multiple individuals.

    Efficient estate management: a descriptive comparison of practical good examples and applicable theories

      Bachelor thesis within Business Administration. Title: Effective estate management. Authors: Joacim StĂ„hl, Patrik Styrbjörn. Tutor: Gunnar Wramsby. Date: 2008-06-05. Subject terms: estate management, efficiency, key figures, good examples, manager, real estate.   Abstract: Background: Given the state of the global economy and the view within the real estate business that efficiency and profitability are needed to survive, it is interesting to study and search for good examples of effective and profitable estate management. Purpose: The purpose of the thesis is to describe how real estate companies work to achieve efficiency and profitability, from the owner/manager perspective. Method: Qualitative interviews with eight respondents in four different real estate companies were used to describe good examples of effective estate management and thereby fulfil the purpose of the thesis. Two interviews were conducted per company, one on site and one over the phone, to improve the reliability of the answers. Theory: The theory is divided into four main parts, of which the first two explain estate management and financial control; thereafter follows a deeper treatment of efficiency and of how efficiency is ensured in estate management from a theoretical perspective. Empirics: This part of the thesis builds on the interviews and shows how real estate companies work with efficiency, focusing on good examples in areas where the companies consider themselves efficient. Analysis: The theories and the empirical research are compared in order to analyze and describe efficient estate management. Conclusions: Efficient estate management is largely about controlling costs, and especially operating costs. To achieve operational efficiency it is therefore important to follow up costs continuously and to take readings every month. Composing and continuously working with budgets and action plans for each estate is important in order to achieve efficiency in the management of estates.

    Fluorescence-Activated Cell Sorting of Specific Affibody-Displaying Staphylococci

    Efficient enrichment of staphylococcal cells displaying specific heterologous affinity ligands on their cell surfaces was demonstrated by using fluorescence-activated cell sorting. Using bacterial surface display of peptide or protein libraries for the purpose of combinatorial protein engineering has previously been investigated by using gram-negative bacteria. Here, the potential for using a gram-positive bacterium was evaluated by employing the well-established surface expression system for Staphylococcus carnosus. Staphylococcus aureus protein A domains with binding specificity for immunoglobulin G, or with engineered specificity for the G protein of human respiratory syncytial virus, were expressed for surface display on S. carnosus cells. The surface accessibility and retained binding specificity of the expressed proteins were demonstrated in whole-cell enzyme and flow cytometry assays. Also, affibody-expressing target cells could be sorted essentially quantitatively from a moderate excess of background cells in a single step by using a high-stringency sorting mode. Furthermore, in a simulated library selection experiment, a more-than-25,000-fold enrichment of target cells could be achieved through only two rounds of cell sorting and regrowth. The results obtained indicate that staphylococcal surface display of affibody libraries combined with fluorescence-activated cell sorting might indeed constitute an attractive alternative to existing technology platforms for affinity-based selections.
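The multi-round enrichment figure compounds multiplicatively, which a short calculation makes explicit. Fold enrichment of one round is conventionally defined as the ratio of target-to-background odds after vs before sorting; the starting ratio and per-round purities below are illustrative assumptions, not values from the paper.

```python
def round_enrichment(p_before, p_after):
    """Fold enrichment of one sorting round: the ratio of
    target-to-background odds after vs before the sort."""
    odds = lambda p: p / (1.0 - p)
    return odds(p_after) / odds(p_before)

# Illustrative scenario (not from the paper): start at 1 target cell per
# 25,000 background cells; two sort-and-regrow rounds take the population
# to ~1.3% and then ~77% target purity.
p0 = 1.0 / 25001.0
p1, p2 = 0.013, 0.77
total = round_enrichment(p0, p1) * round_enrichment(p1, p2)
print(f"cumulative enrichment ~ {total:,.0f}-fold")
```

Because the per-round factors multiply, two moderately stringent sorts are enough to climb several orders of magnitude, which is consistent with the reported more-than-25,000-fold enrichment in two rounds.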

    Lokatt: a hybrid DNA nanopore basecaller with an explicit duration hidden Markov model and a residual LSTM network

    Background: Basecalling long DNA sequences is a crucial step in nanopore-based DNA sequencing protocols. In recent years, the CTC-RNN model has become the leading basecalling model, supplanting the preceding hidden Markov models (HMMs) that relied on pre-segmenting ion current measurements. However, the CTC-RNN model operates independently of prior biological and physical insights. Results: We present a novel basecaller named Lokatt: an explicit duration Markov model combined with a residual-LSTM network. It leverages an explicit duration HMM (EDHMM) designed to model the nanopore sequencing process. Trained on a newly generated library of methylation-free E. coli samples with MinION R9.4.1 chemistry, the Lokatt basecaller achieves a median single-read identity score of 0.930 and a genome coverage ratio of 99.750%, on par with existing state-of-the-art structures when trained on the same datasets. Conclusion: Our research underlines the potential of incorporating prior knowledge into the basecalling process, particularly through integrating HMMs and recurrent neural networks. The Lokatt basecaller showcases the efficacy of a hybrid approach, emphasizing its capacity to achieve high-quality basecalling performance while accommodating the nuances of nanopore sequencing. These outcomes pave the way for advanced basecalling methodologies, with potential implications for enhancing the accuracy and efficiency of nanopore-based DNA sequencing protocols.
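The key modeling idea of an explicit duration HMM is that each hidden state (a k-mer in the pore) persists for a dwell time drawn from its own duration distribution, rather than the geometric dwell implied by ordinary HMM self-transitions. The toy generator below illustrates that idea only; the state names, current levels, and duration pmf are hypothetical, and the real Lokatt model performs inference with learned parameters rather than sampling.

```python
import random

def sample_edhmm(path, levels, dur_pmf, sigma=0.8, seed=7):
    """Toy explicit-duration HMM generator for nanopore-like signals.
    For each hidden state along `path`, draw a dwell time d from an
    explicit duration distribution, then emit d noisy current samples
    around that state's characteristic level."""
    rng = random.Random(seed)
    durations, signal = [], []
    support = sorted(dur_pmf)
    for state in path:
        d = rng.choices(support, weights=[dur_pmf[k] for k in support])[0]
        durations.append(d)
        signal.extend(rng.gauss(levels[state], sigma) for _ in range(d))
    return durations, signal

# Hypothetical 3-state example: k-mer states with distinct mean current
# levels (in pA) and dwell times of 2-5 samples under an explicit pmf.
levels = {"AAC": 80.0, "ACG": 95.0, "CGT": 70.0}
dur_pmf = {2: 0.2, 3: 0.4, 4: 0.3, 5: 0.1}
durs, sig = sample_edhmm(["AAC", "ACG", "CGT"], levels, dur_pmf)
print(durs, len(sig))
```

Decoding inverts this generative picture: the basecaller must jointly infer the state path and the per-state durations from the raw current, which is where the residual LSTM supplies learned emission scores to the EDHMM structure.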